
    An EPTAS for machine scheduling with bag-constraints

    Machine scheduling is a fundamental optimization problem in computer science. The task of scheduling a set of jobs on a given number of machines while minimizing the makespan is well studied, and among other results it is known that an EPTAS exists for machine scheduling on identical machines. Das and Wiese initiated research on a generalization of makespan minimization that includes so-called bag-constraints. In this variant of machine scheduling, the given set of jobs is partitioned into subsets, so-called bags, and a schedule is only considered feasible if every machine receives at most one job from each bag. Das and Wiese showed that this variant of machine scheduling admits a PTAS. We improve on this result by giving the first EPTAS for the machine scheduling problem with bag-constraints. We achieve this result by using new insights on this problem and on the restrictions imposed by the bag-constraints. We show that, to obtain an approximate solution, we can relax the bag-constraints and ignore some of the restrictions. Our EPTAS uses a new instance transformation that allows us to schedule large and small jobs independently of each other for the majority of bags. We also show that, when scheduling large jobs, it suffices to respect the bag-constraints among only a constant number of bags. With these observations, our algorithm allows for some conflicts when computing a schedule, and we show how to repair the schedule in polynomial time by swapping certain jobs around.
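The bag-constraint from this abstract can be illustrated with a small feasibility check: a schedule is feasible only if no machine receives two jobs from the same bag. A minimal sketch; all names and data below are invented toy examples, not the paper's algorithm.

```python
def is_bag_feasible(schedule, bag_of):
    """schedule: dict job -> machine; bag_of: dict job -> bag id.
    Returns True iff every machine holds at most one job per bag."""
    seen = set()
    for job, machine in schedule.items():
        key = (machine, bag_of[job])
        if key in seen:  # second job of this bag on the same machine
            return False
        seen.add(key)
    return True

def makespan(schedule, size):
    """Maximum total processing time over all machines."""
    load = {}
    for job, machine in schedule.items():
        load[machine] = load.get(machine, 0) + size[job]
    return max(load.values())

# Toy instance: jobs 0..3, two machines, bags "a" = {0,1} and "b" = {2,3}.
bags = {0: "a", 1: "a", 2: "b", 3: "b"}
sizes = {0: 3, 1: 2, 2: 2, 3: 3}
ok = {0: 0, 1: 1, 2: 1, 3: 0}   # one job per bag on each machine
bad = {0: 0, 1: 0, 2: 1, 3: 1}  # both bag-"a" jobs on machine 0
```

The EPTAS described above may temporarily violate such checks and then repairs the schedule by swapping jobs; this sketch only expresses the feasibility condition itself.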

    Closing the Gap for Makespan Scheduling via Sparsification Techniques

    Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of n jobs to a set of m identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a (1 + epsilon)-approximation algorithm with a running time that depends polynomially on 1/epsilon. Furthermore, Chen et al. [Chen, Jansen, Zhang, SODA'13] recently showed that a running time of 2^{(1/epsilon)^{1-delta}} + poly(n) for any delta > 0 would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed that try to obtain low dependencies on 1/epsilon, the best of which achieves a running time of 2^{~O(1/epsilon^{2})} + O(n*log(n)) [Jansen, SIAM J. Disc. Math. 2010]. In this paper we obtain an algorithm with a running time of 2^{~O(1/epsilon)} + O(n*log(n)), which is tight under ETH up to logarithmic factors in the exponent. Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications in other settings. In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time 2^{~O(1/epsilon)} + poly(n).
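A "configuration" in the sense of the configuration-IP above is a multiset of (rounded) job sizes whose total does not exceed the makespan target T, and its support is the number of distinct sizes it uses. The following toy enumeration is only a sketch of that notion; the size set and target are invented.

```python
def configurations(sizes, T):
    """All multisets over `sizes` (tuple of distinct sizes) with total <= T,
    returned as tuples of multiplicities aligned with `sizes`."""
    def extend(i, remaining):
        if i == len(sizes):
            yield ()
            return
        for m in range(remaining // sizes[i] + 1):
            for rest in extend(i + 1, remaining - m * sizes[i]):
                yield (m,) + rest
    return list(extend(0, T))

# Toy instance: rounded sizes 2 and 3, makespan target T = 6.
confs = configurations((2, 3), 6)

# The structural result says all but a constant number of machines can be
# assigned a configuration with small support:
small_support = [c for c in confs if sum(1 for m in c if m > 0) <= 1]
```

Restricting most machines to small-support configurations is what makes the configuration-IP sparse enough for the integer programming and enumeration techniques mentioned above.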

    Faster Algorithms for Integer Programs with Block Structure

    We consider integer programming problems $\max \{ c^T x : \mathcal{A} x = b,\ l \leq x \leq u,\ x \in \mathbb{Z}^{nt}\}$ where $\mathcal{A}$ has a (recursive) block structure generalizing "$n$-fold integer programs", which recently received considerable attention in the literature. An $n$-fold IP is an integer program where $\mathcal{A}$ consists of $n$ repetitions of a submatrix $A \in \mathbb{Z}^{r \times t}$ on the top horizontal part and $n$ repetitions of a matrix $B \in \mathbb{Z}^{s \times t}$ on the diagonal below the top part. Instead of allowing only two types of block matrices, one for the horizontal line and one for the diagonal, we generalize the $n$-fold setting to allow for arbitrary matrices in every block. We show that such an integer program can be solved in time $n^2 t^2 \phi \cdot (rs\Delta)^{\mathcal{O}(rs^2 + sr^2)}$ (ignoring logarithmic factors). Here $\Delta$ is an upper bound on the largest absolute value of an entry of $\mathcal{A}$ and $\phi$ is the largest binary encoding length of a coefficient of $c$. This improves upon the previously best algorithm of Hemmecke, Onn and Romanchuk that runs in time $n^3 t^3 \phi \cdot \Delta^{\mathcal{O}(t^2 s)}$. In particular, our algorithm is not exponential in the number $t$ of columns of $A$ and $B$. Our algorithm is based on a new upper bound on the $l_1$-norm of an element of the "Graver basis" of an integer matrix and on a proximity bound between the LP and IP optimal solutions tailored for IPs with block structure. These new bounds rely on the "Steinitz Lemma". Furthermore, we extend our techniques to the recently introduced "tree-fold IPs", where we again present a more efficient algorithm in a generalized setting.
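The $n$-fold block structure described above ($n$ copies of $A$ across the top, $n$ copies of $B$ on the diagonal) can be made concrete by assembling the constraint matrix. A plain-Python sketch with invented toy blocks:

```python
def nfold_matrix(A, B, n):
    """Build the (r + n*s) x (n*t) n-fold constraint matrix as a list of rows:
    n horizontal repetitions of A on top, n diagonal copies of B below."""
    r, t = len(A), len(A[0])
    s = len(B)
    rows = []
    for i in range(r):  # top part: row i of A repeated n times
        rows.append([A[i][j % t] for j in range(n * t)])
    for block in range(n):  # diagonal part: one copy of B per brick
        for i in range(s):
            row = [0] * (n * t)
            for j in range(t):
                row[block * t + j] = B[i][j]
            rows.append(row)
    return rows

# Toy blocks: r = 1, s = 2, t = 2, n = 3.
A = [[1, 1]]
B = [[1, 0],
     [0, 1]]
M = nfold_matrix(A, B, 3)
```

The generalization in the paper allows a different matrix in every block; this sketch shows only the classical uniform $n$-fold shape for intuition.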

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables $n$ is a variable part of the input. We consider two natural parameters of the constraint matrix $A$: its numeric measure $a$ and its sparsity measure $d$. We show that integer programming can be solved in time $g(a,d)\,\textrm{poly}(n,L)$, where $g$ is some computable function of the parameters $a$ and $d$, and $L$ is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by $a$ and $d$, and is solvable in polynomial time for every fixed $a$ and $d$. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly polynomial algorithm, that is, one with running time $g(a,d)\,\textrm{poly}(n)$, independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix $A$, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for $n$-fold, tree-fold, and $2$-stage stochastic integer programs. We also discuss some of the many applications of these classes.
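The iterative-augmentation idea above can be sketched on a toy problem: starting from a feasible point, repeatedly apply a step $g$ from a fixed candidate set with $Ag = 0$ (so $Ax$ never changes) whenever it improves the objective and stays inside the variable bounds. The candidate set below stands in for a Graver basis; the problem data are invented, and real implementations use best-improvement and scaling tricks rather than this naive loop.

```python
def augment(x, steps, cost, lower, upper):
    """Greedy augmentation for min cost^T x over the box [lower, upper],
    moving only along kernel directions of the constraint matrix."""
    def feasible(v):
        return all(lower[i] <= v[i] <= upper[i] for i in range(len(v)))
    def value(v):
        return sum(c * vi for c, vi in zip(cost, v))
    improved = True
    while improved:
        improved = False
        for g in steps:
            cand = [xi + gi for xi, gi in zip(x, g)]
            if feasible(cand) and value(cand) < value(x):
                x = cand  # accept the augmenting step
                improved = True
    return x

# Toy IP: minimize x0 + 2*x1 subject to x0 + x1 = 4, 0 <= xi <= 4.
# For A = [1 1], the kernel steps (1,-1) and (-1,1) form its Graver basis.
steps = [(1, -1), (-1, 1)]
x = augment([0, 4], steps, [1, 2], [0, 0], [4, 4])
```

The framework's guarantee is that Graver steps always suffice to reach an optimum; the bounds and proximity results mentioned above are what make finding such steps fast.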

    Fully Dynamic Bin Packing Revisited

    We consider the fully dynamic bin packing problem, where items arrive and depart in an online fashion and repacking of previously packed items is allowed. The goal is, of course, to minimize both the number of bins used as well as the amount of repacking. A recently introduced way of measuring the repacking costs at each timestep is the migration factor, defined as the total size of repacked items divided by the size of an arriving or departing item. Concerning the trade-off between the number of bins and the migration factor, if we wish to achieve an asymptotic competitive ratio of $1+\epsilon$ for the number of bins, a relatively simple argument proves a lower bound of $\Omega(\frac{1}{\epsilon})$ for the migration factor. We establish a nearly matching upper bound of $O(\frac{1}{\epsilon^4}\log\frac{1}{\epsilon})$ using a new dynamic rounding technique and new ideas to handle small items in a dynamic setting such that no amortization is needed. The running time of our algorithm is polynomial in the number of items $n$ and in $\frac{1}{\epsilon}$. The previous best trade-off was for an asymptotic competitive ratio of $\frac{5}{4}$ for the bins (rather than $1+\epsilon$) and needed an amortized number of $O(\log n)$ repackings (while in our scheme the number of repackings is independent of $n$ and non-amortized).
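The migration factor defined above is a simple ratio, which the following sketch computes for one repacking step; the item sizes are invented.

```python
def migration_factor(repacked_sizes, trigger_size):
    """Total size of repacked items divided by the size of the item whose
    arrival or departure triggered the repacking."""
    return sum(repacked_sizes) / trigger_size

# An item of size 2 arrives and the algorithm moves two previously packed
# items, of sizes 3 and 5, to other bins while placing it:
mf = migration_factor([3, 5], 2)
```

A bounded migration factor thus limits repacking per timestep proportionally to the size of the triggering item, which is the non-amortized guarantee the abstract emphasizes.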

    Complexity Bounds for Block-IPs

    We consider integer programs (IPs) with a certain block structure, called two-stage stochastic. A two-stage stochastic IP is an integer program of the form $\min\{c^T x \mid Ax=b,\ \ell\leq x\leq u,\ x\in \mathbb{Z}^{s+nt}\}$ where the constraint matrix $A\in \mathbb{Z}^{rn \times (s+tn)}$ consists of blocks $A^{(i)} \in \mathbb{Z}^{r\times s}$ on a vertical line and blocks $B^{(i)}\in \mathbb{Z}^{r\times t}$ on the diagonal line beside it. We improve the bound for the Graver complexity of two-stage stochastic IPs. Our bound of $3^{O(s^s(2r\|A\|_\infty+1)^{rs})}$ reduces the dependency from $rs^2$ to $rs$ and is asymptotically tight under the exponential time hypothesis in the case that $r=1$. The improved Graver complexity bound stems from improved bounds on the intersection of a class of structurally rich integer cones. Our bound of $3^{O(d\Delta)^d}$ for dimension $d$ and absolute entries bounded by $\Delta$ is independent of the number of intersected integer cones. We investigate special properties of this class, which is complemented by the fact that these properties do not hold for general integer cones. Moreover, we give structural characterizations of this class that admit their use for two-stage stochastic IPs.
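The two-stage stochastic block structure described above differs from the $n$-fold shape: the $A^{(i)}$ blocks are stacked vertically in the first $s$ columns, with the $B^{(i)}$ blocks on the diagonal to their right. A plain-Python assembly sketch with invented toy blocks:

```python
def two_stage_matrix(A_blocks, B_blocks):
    """Build the (n*r) x (s + n*t) two-stage stochastic constraint matrix:
    block i contributes r rows [A^(i) | 0 .. B^(i) .. 0]."""
    n = len(A_blocks)
    s = len(A_blocks[0][0])
    t = len(B_blocks[0][0])
    rows = []
    for i in range(n):
        for row_a, row_b in zip(A_blocks[i], B_blocks[i]):
            row = list(row_a) + [0] * (n * t)
            row[s + i * t: s + (i + 1) * t] = row_b  # place B^(i) on the diagonal
            rows.append(row)
    return rows

# Toy instance: n = 2 scenarios, r = 2, s = 1 first-stage variable, t = 2.
A_blocks = [[[1], [0]],
            [[1], [1]]]
B_blocks = [[[1, 0], [0, 1]],
            [[1, 1], [0, 1]]]
M = two_stage_matrix(A_blocks, B_blocks)
```

The first-stage columns coupling all scenarios are what drives the Graver complexity the abstract bounds; this sketch only shows the matrix shape, not the bound itself.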